When building state-of-the-art speech translation models, the need for large computational resources is a significant obstacle, owing to the size of the training data and the complexity of the models. The availability of pre-trained models is a promising opportunity to build strong speech translation systems efficiently. As a first step, we investigate efficient strategies for building cascaded and end-to-end speech translation systems from pre-trained models. With these strategies, we can train and apply the models on a single GPU. While the end-to-end models achieve better translation performance than the cascaded ones, this approach is limited by its need for additional end-to-end training data. As a second step, we propose an additional similarity loss that encourages the model to generate similar hidden representations for speech and its transcript. Using this technique, we can increase data efficiency and improve translation quality by 6 BLEU points in scenarios with limited end-to-end training data.
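The abstract does not specify the exact form of the similarity loss; below is a minimal sketch, assuming a cosine-distance penalty between pooled encoder states of the speech input and its transcript (the function and variable names are hypothetical, not from the paper):

```python
import math

def cosine_similarity_loss(speech_repr, text_repr):
    """Similarity loss: 1 - cosine similarity between pooled hidden states.

    Drives the encoder to map a speech segment and its transcript to
    nearby points in representation space; identical directions give 0,
    orthogonal directions give 1.
    """
    dot = sum(s * t for s, t in zip(speech_repr, text_repr))
    norm_s = math.sqrt(sum(s * s for s in speech_repr))
    norm_t = math.sqrt(sum(t * t for t in text_repr))
    return 1.0 - dot / (norm_s * norm_t)
```

In practice this term would be added, with some weight, to the usual translation cross-entropy loss.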
Cross-silo federated learning (FL) has become a promising tool in machine learning applications for healthcare. It allows hospitals and institutions to train models on sufficient data while keeping the data private. To ensure that an FL model is robust when facing heterogeneous data among FL clients, most efforts have focused on personalizing models for individual clients. However, the latent relationships between clients' data are ignored. In this work, we focus on a special non-IID FL problem, called domain-mixed FL, in which each client's data distribution is assumed to be a mixture of several predefined domains. Recognizing the diversity across domains and the similarity within domains, we propose a novel method, FedDAR, which learns a domain-shared representation and domain-wise personalized prediction heads in a decoupled manner. For a simplified linear regression setting, we theoretically prove that FedDAR enjoys a linear convergence rate. For general settings, we conduct intensive empirical studies on both synthetic and real-world medical datasets, which demonstrate its superiority over prior FL methods.
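As a rough illustration of the decoupled design, the sketch below assumes a FedAvg-style aggregation of the domain-shared representation parameters, while each client's domain-wise prediction head stays local; the names and dict-of-parameters layout are illustrative, not FedDAR's actual implementation:

```python
def aggregate_shared(client_shared_params):
    """One server round: average the domain-shared representation
    parameters across clients (FedAvg-style).

    Personalized prediction heads are never sent to the server, so the
    shared representation and the per-domain heads evolve in a
    decoupled manner.
    """
    n = len(client_shared_params)
    keys = client_shared_params[0].keys()
    return {k: sum(p[k] for p in client_shared_params) / n for k in keys}
```

Each client would then continue local training with the averaged shared parameters and its own untouched head.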
We study the performance of the gradient play algorithm for stochastic games (SGs), where each agent tries to maximize its own total discounted reward by making decisions independently based on current state information shared between the agents. Policies are directly parameterized by the probability of choosing a certain action in a given state. We show that Nash equilibria (NEs) and first-order stationary policies are equivalent in this setting, and give a local convergence rate around strict NEs. Furthermore, for a subclass of SGs called Markov potential games (which includes, as an important special case, the cooperative setting in which agents share identical rewards), we design a sample-based reinforcement learning algorithm and give a non-asymptotic global convergence rate analysis for both exact gradient play and our sample-based learning algorithm. Our results show that the number of iterations needed to reach an $\epsilon$-NE scales linearly, rather than exponentially, with the number of agents. Local geometry and local stability are also considered, where we prove that strict NEs are local maxima of the total potential function and that fully mixed NEs are saddle points.
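Since policies are parameterized directly by action probabilities, one gradient-play iteration for a single state amounts to a projected gradient ascent step onto the probability simplex. A minimal sketch (the projection routine and the step size are standard choices for illustration, not taken from the paper):

```python
def project_to_simplex(v):
    # Euclidean projection onto the probability simplex via the
    # classic sort-and-threshold algorithm
    u = sorted(v, reverse=True)
    cumsum, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        cumsum += ui
        t = (cumsum - 1.0) / i
        if ui - t > 0:
            theta = t
    return [max(x - theta, 0.0) for x in v]

def gradient_play_step(policy, grad, lr=0.1):
    # one gradient-play update: ascend the reward gradient in the
    # direct (probability) parameterization, then project back so the
    # action probabilities remain a valid distribution
    return project_to_simplex([p + lr * g for p, g in zip(policy, grad)])
```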
Neuroevolution has greatly advanced Deep Neural Network (DNN) architecture design and its applications, yet methods that work across different DNN types while accounting for both scale and performance are lacking. In this study, we propose a self-adaptive neuroevolution (SANE) approach to automatically construct lightweight DNN architectures for different tasks. A key setting in SANE is the search space, defined by cells and organs that self-adapt to different DNN types. Based on this search space, a constructive evolution strategy with uniform evolution settings and operations is designed to grow DNN architectures gradually. SANE self-adaptively adjusts evolutionary exploration and exploitation to improve search efficiency. Moreover, a speciation scheme is developed to protect evolution from premature convergence by restricting selection competition to within species. To evaluate SANE, we carry out neuroevolution experiments generating different DNN architectures, including a convolutional neural network, a generative adversarial network and a long short-term memory network. The results show that the obtained DNN architectures can be smaller in scale while achieving performance similar to existing DNN architectures. Our proposed SANE provides an efficient approach to self-adaptively searching for DNN architectures across different types.
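The speciation idea can be illustrated with a simple greedy grouping: a genome joins the first species whose representative lies within a compatibility threshold, so selection competition stays within species. This is a generic sketch assuming a user-supplied distance function, not SANE's exact procedure:

```python
def speciate(genomes, distance, threshold):
    """Greedily partition genomes into species.

    Each genome is compared against the first member (representative)
    of every existing species; it joins the first species within the
    compatibility threshold, otherwise it founds a new species.
    """
    species = []
    for g in genomes:
        for s in species:
            if distance(g, s[0]) < threshold:
                s.append(g)
                break
        else:
            species.append([g])
    return species
```

Selection and reproduction would then be run inside each species separately, shielding novel architectures from being eliminated by established ones too early.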
Since the intensity of MRI volumes is inconsistent across institutions, extracting universal features of multi-modal MRI is essential for precisely segmenting brain tumors. With this in mind, we propose a volumetric vision transformer that follows two windowing strategies to extract fine-grained features, and we apply local distributional smoothness (LDS), inspired by virtual adversarial training (VAT), during model training to make the model robust. We trained and evaluated this network architecture on the FeTS Challenge 2022 dataset. Our performance on the online validation dataset is as follows: Dice similarity scores of 81.71%, 91.38% and 85.40%, and Hausdorff distances (95%) of 14.81 mm, 3.93 mm and 11.18 mm, for the enhancing tumor, whole tumor and tumor core, respectively. Overall, the experimental results verify the effectiveness of our method by yielding better segmentation accuracy for each tumor sub-region. Our code implementation is publicly available at https://github.com/himashi92/vizviva_fets_2022
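Local distributional smoothness penalizes how much a model's prediction changes under a small perturbation of the input. The sketch below uses a random perturbation and a KL divergence between the clean and perturbed predictions; full VAT instead searches for the worst-case adversarial direction, so treat this as a simplified illustration with hypothetical names:

```python
import math
import random

def kl_divergence(p, q):
    # KL(p || q) for discrete probability vectors
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def lds_loss(predict, x, eps=0.01, seed=0):
    """LDS regularizer: perturb the input by a small normalized random
    noise vector and measure how far the prediction moves.

    `predict` maps an input vector to a probability vector over classes.
    """
    rng = random.Random(seed)
    noise = [rng.gauss(0.0, 1.0) for _ in x]
    norm = math.sqrt(sum(n * n for n in noise)) or 1.0
    x_perturbed = [xi + eps * ni / norm for xi, ni in zip(x, noise)]
    return kl_divergence(predict(x), predict(x_perturbed))
```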
Due to the similarity between tissues and bones, global correlations are widely present in human anatomy. These correlations are reflected in magnetic resonance imaging (MRI) scans because of close-range proton density and T1/T2 parameters. Furthermore, to achieve accelerated MRI, the k-space data are undersampled, which causes global aliasing artifacts. Convolutional neural network (CNN) models are widely used for accelerated MRI reconstruction, but these models are limited in capturing global correlations due to the intrinsic locality of the convolution operation. Self-attention-based transformer models can capture global correlations among image features; however, the current contributions of transformer models to MRI reconstruction are minor, mostly providing CNN-transformer hybrid solutions and rarely exploiting the physics of MRI. In this paper, we propose a physics-based, stand-alone (convolution-free) transformer model, titled Multi-head Cascaded Swin Transformers (MCSTRA), for accelerated MRI reconstruction. MCSTRA combines several interrelated MRI-physics concepts with transformer networks: it exploits global MR features via the shifted-window self-attention mechanism; it separately extracts MR features belonging to different spectral components using a multi-head setup; and it iterates between intermediate de-aliasing and k-space correction via a cascaded network with data consistency in k-space and intermediate loss computations. Furthermore, we propose a novel positional-embedding generation mechanism to guide self-attention using the point spread function corresponding to the undersampling mask. Our model significantly outperforms state-of-the-art MRI reconstruction methods both visually and quantitatively, while exhibiting improved resolution and de-aliasing.
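The data-consistency step mentioned above can be illustrated simply: wherever k-space was actually sampled, the measured values replace the network's prediction. A minimal sketch over flat lists (real implementations operate on complex-valued multi-dimensional arrays):

```python
def data_consistency(pred_kspace, measured_kspace, mask):
    """Hard data consistency for accelerated MRI reconstruction.

    Keep the acquired k-space samples where the undersampling mask is
    truthy, and the network's predicted values elsewhere, so the
    reconstruction never contradicts the actual measurements.
    """
    return [m if sampled else p
            for p, m, sampled in zip(pred_kspace, measured_kspace, mask)]
```

In a cascaded network like the one described, this correction is interleaved between de-aliasing stages.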
This paper proposes an adversarial-learning-based training approach for the brain tumor segmentation task. In this approach, a 3D segmentation network learns through a dual reciprocal adversarial learning scheme. To improve the generalization of the segmentation predictions and make the segmentation network robust, we follow the virtual adversarial training approach by adding some noise to the original patient data. By incorporating a critic that acts as a quantitative subjective referee, the segmentation network learns from the uncertainty information associated with the segmentation results. We trained and evaluated the network architecture on the RSNA-ASNR-MICCAI BraTS 2021 dataset. Our performance on the online validation dataset is as follows: Dice similarity scores of 81.38%, 90.77% and 85.39% for the enhancing tumor, whole tumor and tumor core, respectively, with Hausdorff distances (95%) of 5.37 mm and 8.56 mm for the whole tumor and tumor core. Likewise, our approach achieves Dice similarity scores of 84.55%, 90.46% and 85.30%, and Hausdorff distances (95%) of 13.48 mm, 6.32 mm and 16.98 mm, on the final test dataset. Overall, our proposed method yields better segmentation accuracy for each tumor sub-region. Our code implementation is publicly available at https://github.com/himashi92/vizviva_brats_2021
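The Dice similarity scores reported above compare a predicted binary mask against the ground truth for each tumor sub-region; a minimal reference implementation for flat binary masks:

```python
def dice_score(pred, target):
    """Dice similarity coefficient for flat binary masks (0/1 values).

    Dice = 2 * |pred ∩ target| / (|pred| + |target|); two empty masks
    are treated as a perfect match.
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * intersection / total if total else 1.0
```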
This paper presents a transformer architecture for volumetric medical image segmentation. Designing a computationally efficient transformer architecture for volumetric segmentation is challenging: it requires maintaining a complex balance between encoding local and global spatial cues, and preserving information along all axes of the volumetric data. The proposed volumetric transformer has a U-shaped encoder-decoder design that processes the input voxels in their entirety. Our encoder has two consecutive self-attention layers to simultaneously encode local and global cues, and our decoder has novel parallel, window-based self- and cross-attention blocks that capture fine details for boundary refinement by incorporating Fourier position encoding. Our design choices result in a computationally efficient architecture that demonstrates promising results on the Brain Tumor Segmentation (BraTS) 2021 dataset and the Medical Segmentation Decathlon (Pancreas and Liver) datasets for tumor segmentation. We further show that the representations learned by our model transfer better across datasets and are robust to data corruption. \href{https://github.com/himashi92/vt-unet}{Our code implementation is publicly available}.
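Fourier (sinusoidal) position encodings interleave sines and cosines at geometrically spaced frequencies; below is a minimal 1D sketch of the standard transformer formulation (the paper's volumetric variant will differ in detail):

```python
import math

def fourier_position_encoding(pos, dim):
    """Sinusoidal positional encoding for a single scalar position.

    Pairs of (sin, cos) at frequencies 1 / 10000^(i/dim) let attention
    layers recover relative offsets between positions.
    """
    enc = []
    for i in range(0, dim, 2):
        freq = 1.0 / (10000.0 ** (i / dim))
        enc.append(math.sin(pos * freq))
        enc.append(math.cos(pos * freq))
    return enc[:dim]  # trim the last cos if dim is odd
```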
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
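NAIVEATTACK's trigger injection at the initial distillation phase can be illustrated by stamping a small patch onto raw images before distillation begins; a minimal sketch over 2D lists of pixel values (the actual attack operates on image tensors, and DOORPING would additionally keep updating the trigger throughout distillation):

```python
def add_trigger(image, trigger, row=0, col=0):
    """Stamp a small trigger patch onto a 2D image.

    Returns a new image with `trigger` overwriting the region whose
    top-left corner is at (row, col); the input image is not mutated.
    """
    out = [r[:] for r in image]  # deep-enough copy of the 2D grid
    for i, trigger_row in enumerate(trigger):
        for j, val in enumerate(trigger_row):
            out[row + i][col + j] = val
    return out
```

A poisoned training set is then built by applying `add_trigger` to a fraction of the raw images (with the attacker's target label) before the distillation procedure runs.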